Socially Rational Agents
Authors: L. M. Hogg and N. R. Jennings
Abstract
Autonomous agents are designed to carry out problem-solving actions in order to achieve given, or self-generated, goals. A central aspect of this design is the agent’s decision-making function, which determines the right actions to perform in a given situation to best achieve the agent’s objectives. Traditionally, this function has been solipsistic in nature and based upon the principle of individual utility maximisation. However, we believe that when designing multi-agent systems this may not be the most appropriate choice. Rather, we advocate a more social view of rationality which strikes a balance between the needs of the individual and those of the overall system. To this end, we describe a preliminary formulation of social rationality, indicate how the definition can vary depending on resource bounds, and illustrate its use in a fire-fighting scenario.

1. The Case for Social Rationality

Rational agents make decisions about which actions to perform at what times in order to best achieve their goals and objectives. The exact nature and the underpinning principles of an agent’s decision-making function have been studied in a range of disciplines, including philosophy, economics and sociology. This work has demonstrated that the design of the decision-making function is the critical determinant of the success of the agent [Doyle 83]. The current predominant view is to equate rational decision making with maximising the expected utility of actions, as dictated by decision theory [Horvitz et al. 88] (although see [Castelfranchi and Conte 97] for a critique of this position). This utilitarian view of rationality may be adequate when all that is being considered is the design and success of a single agent. However, when designing a system in which multiple agents need to interact (cooperate and coordinate) in order to achieve both individual and system goals, we feel that such a solipsistic view may not be the most appropriate.
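The decision-theoretic view criticised above can be sketched as follows. This is a minimal illustration, not the paper’s method: the action names, outcome probabilities and utility values are invented for the example, and the fire-fighting flavour merely echoes the scenario mentioned in the abstract.

```python
# Sketch of the classical, individually rational view: an agent chooses
# the action that maximises its own expected utility. Actions, outcome
# probabilities and utilities are illustrative assumptions.

def expected_utility(action, model):
    """EU(a) = sum over outcomes o of P(o | a) * U(o)."""
    return sum(p * u for p, u in model[action])

def choose(model):
    """A solipsistic agent simply maximises its own expected utility."""
    return max(model, key=lambda a: expected_utility(a, model))

# Hypothetical fire-fighting choices: lists of (probability, utility) pairs.
model = {
    "fight_fire_here": [(0.8, 10.0), (0.2, -5.0)],       # EU = 7.0
    "move_to_next_building": [(0.5, 12.0), (0.5, 0.0)],  # EU = 6.0
}
print(choose(model))  # → fight_fire_here
```

Note that nothing in this calculation refers to other agents or to system-level goals; that is exactly the solipsism the paper argues against.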
For the moment we limit our claim to the case in which the designer wishes to build a (complex) system using autonomous agents (such systems are closed in the sense that the designer knows precisely which agents are part of the system and has control over their decision-making functions; hence exploitation by outside agents is not a concern). Relevant examples include process control systems, telecommunications management systems, business process management systems, air traffic control systems, and manufacturing systems. Thus we are not talking about pure distributed problem-solving systems nor pure multi-agent systems. In the former case, the sole concern of the designer is with the overall performance of the system, and the performance of the individual agents is not important. In the latter case, the concern of the designer is with the performance of the individual agents, and the system-level performance is left to emerge out of the interplay between the constituent components. Rather, we are concerned with a hybrid case in which the designer wishes to exploit the conceptual power of autonomous agents (as in the multi-agent systems view), but wishes to achieve system-level objectives (as in the distributed problem solving case). In this hybrid context, we feel that a view of rationality which considers, and attempts to strike a balance between, both individual and system-level goals is both more natural and more likely to succeed² (cf. the view of the market-based computing community [Wellman 93]). This more social view of rationality means that an agent has to consider the implications of its choices on other, and sometimes all, agents in the system. That is, agents need to be given a social perspective to aid their decision making and improve their performance [Castelfranchi 90]. Several examples of such social decision-making functions have been identified [Cesta et al.
96; Kalenka and Jennings 97]; however, we feel that the conceptual foundations of these functions need to be analysed in greater depth. To this end, we start from the following decision-making principle [Jennings and Campos 97]:

Principle of Social Rationality: If a socially rational agent can perform an action whose joint benefit is greater than its joint loss, then it may select that action.

Joint benefit is a combined measure which incorporates the benefit provided to the individual and the benefit afforded to the overall system as a result of an action (mutatis mutandis for joint loss). Although this definition focuses the agent on choosing actions that are more beneficial from the societal viewpoint, it provides neither concrete guidance in the choice of alternatives nor a framework for maintaining a balance between individual and system needs. Thus, to be applied practically, the definition needs to be expanded in these directions. Because of its intuitive and formal treatment of making decisions from a set of alternatives under uncertainty, we will use the notion of the expected utility of actions in deriving a more descriptive notion of choice within a multi-agent environment. From the aforementioned principle of social rationality, to calculate the expected utility (EU) of an action α, an agent needs to combine (using some function f) the benefit the action provides to the individual with the benefit it affords to the overall system.

² Whilst it would be possible to manipulate the agent’s individual utility (or goals) so that it incorporates a measure of social awareness, this would simply be hiding the underlying social principles behind the numbers.

Socially Rational Agents. L. M. Hogg and N. R. Jennings, Department of Electronic Engineering, Queen Mary & Westfield College, University of London, London E1 4NS, UK. {L.M.Hogg, N.R.Jennings}@qmw.ac.uk. From: AAAI Technical Report FS-97-02. Compilation copyright © 1997, AAAI (www.aaai.org). All rights reserved.
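The principle of social rationality can be sketched as a filter over candidate actions. This is only one possible reading: the paper leaves the combining function f unspecified at this point, so the weighted sum below, the equal weights, and the fire-fighting action names and numbers are all illustrative assumptions.

```python
# Sketch of the Principle of Social Rationality: an agent may select an
# action whose joint benefit is greater than its joint loss. Here the
# combining function f is taken to be a weighted sum of individual
# utility (IU) and system-level utility (SU); f, the weights and the
# numbers are illustrative assumptions, not prescribed by the paper.

def joint_benefit(iu, su, w_ind=0.5, w_soc=0.5):
    """One possible f: a convex combination of individual and social utility."""
    return w_ind * iu + w_soc * su

def admissible_actions(actions):
    """Actions whose joint benefit exceeds their joint loss."""
    return [a for a, (iu, su, loss) in actions.items()
            if joint_benefit(iu, su) > loss]

# Hypothetical fire-fighting choices: (IU, SU, joint loss).
actions = {
    "extinguish_own_sector": (9.0, 3.0, 4.0),   # good for self, modest system gain
    "reinforce_neighbour":   (2.0, 10.0, 4.0),  # costly to self, big system gain
    "idle":                  (1.0, 0.0, 2.0),   # benefit does not exceed loss
}
print(admissible_actions(actions))  # → ['extinguish_own_sector', 'reinforce_neighbour']
```

Varying the weights w_ind and w_soc is one crude way to strike the individual-versus-system balance the paper calls for; with equal weights, the self-sacrificing action qualifies just as the self-serving one does, while idling is ruled out.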
Similar articles
Modelling Socially Intelligent Agents in Organisations
Some work on modelling boundedly rational agents in organisations is described. It is then argued that social intelligence is not merely intelligence plus interaction but should allow for individual relationships to develop between agents. This means that, at least, agents must be able to distinguish, identify, model and address other agents, either individually or in groups; in other words tha...
Can a Rational Agent Afford to be Affectless? A Formal Approach
In this article, we expose some of the issues raised by the critics of the neoclassical approach to rational agent modeling and we propose a formal approach for the design of artificial rational agents that includes some of the functions of emotions found in the human system. We suggest that emotions and rationality are closely linked in the human mind (and in the body, for that matter) and, the...
Learning to Behave Socially
Our previous work introduced a methodology for synthesizing and analyzing basic behaviors which served as a substrate for generating a large repertoire of higher-level group interactions (Matarić 1992, Matarić 1993). In this paper we describe how, given the substrate, agents can learn to behave socially, i.e. to maximize average individual benefit by maximizing collective benefit. While this is a well...
Socially Rational Models for Autonomous Agents
Autonomous multi-agent systems that are to coordinate must be designed according to models that accommodate such complex social behavior as compromise, negotiation, and altruism. In contrast to individually rational models, where each agent seeks to maximize its own welfare without regard for others, socially rational agents have interests beyond themselves. Such models require a new type of ut...
Delegations guided by trust and autonomy
This paper explores delegation decisions predicated on models of trust and autonomy among agents. In socially rich environments, the trust and autonomy of artificial agents are key attributes for rational delegation decisions. Social agents are affected by many social attributes such as benevolence, social exchanges, power, and norms. We present cognitively inspired working models of trust and auto...
Multiagent Graph Coloring: Pareto Efficiency, Fairness and Individual Rationality
We consider a multiagent extension of single-agent graph coloring. Multiple agents hold disjoint autonomous subgraphs of a global graph, and every color used by the agents in coloring the graph has an associated cost. In this multiagent graph coloring scenario, we seek a minimum legal coloring of the global graph’s vertices, such that the coloring is also Pareto efficient, socially fair, and indiv...
Publication date: 1997